"Similar representations of emotions across faces and voices": Correction to Kuhn et al. (2017).
Authors
Abstract
Reports an error in "Similar Representations of Emotions Across Faces and Voices" by Lisa Katharina Kuhn, Taeko Wydell, Nadine Lavan, Carolyn McGettigan and Lúcia Garrido (Emotion, Advanced Online Publication, Mar 02, 2017, np). In the article, the copyright attribution was incorrectly listed and the Creative Commons CC-BY license disclaimer was incorrectly omitted from the author note. The correct copyright is "© 2017 The Author(s)" and the omitted disclaimer is below. All versions of this article have been corrected. "This article has been published under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Copyright for this article is retained by the author(s). Author(s) grant(s) the American Psychological Association the exclusive right to publish the article and identify itself as the original publisher." (The following abstract of the original article appeared in record 2017-09406-001.) Emotions are a vital component of social communication, carried across a range of modalities and via different perceptual signals such as specific muscle contractions in the face and in the upper respiratory system. Previous studies have found that emotion recognition impairments after brain damage depend on the modality of presentation: recognition from faces may be impaired whereas recognition from voices remains preserved, and vice versa. On the other hand, there is also evidence for shared neural activation during emotion processing in both modalities. In a behavioral study, we investigated whether there are shared representations in the recognition of emotions from faces and voices. We used a within-subjects design in which participants rated the intensity of facial expressions and nonverbal vocalizations for each of the 6 basic emotion labels. 
For each participant and each modality, we then computed a representation matrix with the intensity ratings of each emotion. These matrices allowed us to examine the patterns of confusions between emotions and to characterize the representations of emotions within each modality. We then compared the representations across modalities by computing the correlations of the representation matrices across faces and voices. We found highly correlated matrices across modalities, which suggest similar representations of emotions across faces and voices. We also showed that these results could not be explained by commonalities between low-level visual and acoustic properties of the stimuli. We thus propose that there are similar or shared coding mechanisms for emotions which may act independently of modality, despite their distinct perceptual inputs. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Similar articles
Similar Representations of Emotions across Faces and Voices
Emotions are a vital component of social communication, carried across a range of modalities and via different perceptual signals such as specific muscle contractions in the face and in the upper respiratory system. Previous studies have found that emotion recognition impairments after brain damage depend on the modality of presentation: recognition from faces may be impaired whilst recognition...
Memory for faces and voices varies as a function of sex and expressed emotion
We investigated how memory for faces and voices (presented separately and in combination) varies as a function of sex and emotional expression (anger, disgust, fear, happiness, sadness, and neutral). At encoding, participants judged the expressed emotion of items in forced-choice tasks, followed by incidental Remember/Know recognition tasks. Results from 600 participants showed that accuracy (h...
A unified coding strategy for processing faces and voices
Both faces and voices are rich in socially-relevant information, which humans are remarkably adept at extracting, including a person's identity, age, gender, affective state, personality, etc. Here, we review accumulating evidence from behavioral, neuropsychological, electrophysiological, and neuroimaging studies which suggest that the cognitive and neural processing mechanisms engaged by perce...
Functionally localized representations contain distributed information: insight from simulations of deep convolutional neural networks
Preferential activation to faces in the brain’s fusiform gyrus has led to the proposed existence of a face module termed the Fusiform Face Area (FFA) (Kanwisher et. al, 1997). However, arguments for distributed, topographical object-form representations in FFA and across visual cortex have been proposed to explain data showing that FFA activation patterns contain decodable information about non...
Commentary: "Hearing faces and seeing voices": Amodal coding of person identity in the human brain
In a recent paper, Hasan et al. (2016) report the results of a neuroimaging study on amodal person identity processing. Across multiple testing sessions, five participants were presented with audioonly, video-only and audiovisual stimuli of four familiar people producing the syllable “had.” During the scanning session, participants performed a forced-choice person identification task in respons...
Journal: Emotion
Volume: 17, Issue: 6
Pages: -
Publication date: 2017